21.
Impact crater populations help us understand solar system dynamics, planetary surface histories, and surface modification processes. The single previous effort to standardize how crater data are displayed in graphs, tables, and archives was a 1978 NASA report by the Crater Analysis Techniques Working Group, published in 1979 in Icarus. The report had a significant lasting effect, but later decades brought major advances in the statistical and computer sciences while the crater field's techniques remained largely unchanged. In this new work, we revisit the fundamental techniques for displaying and analyzing crater population data and demonstrate better statistical methods that can be used. Specifically, we address (1) how crater size-frequency distributions (SFDs) are constructed, (2) how error bars are assigned to SFDs, and (3) how SFDs are fit to power laws and other models. We show that the new methods yield results similar to those of previous techniques, in that the SFDs have familiar shapes, but better account for multiple sources of uncertainty. We also recommend graphic, display, and archiving methods that reflect the capabilities of modern computers and fulfill NASA's current requirements for Data Management Plans.
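To make the statistical side of points (1)–(3) concrete, here is a minimal sketch (not the paper's actual pipeline) of a maximum-likelihood power-law fit to crater diameters and a cumulative SFD with simple Poisson error bars; the diameters, the d_min cutoff, and the counting area used below are all synthetic.

```python
import numpy as np

def fit_powerlaw_mle(diameters, d_min):
    """Maximum-likelihood slope of a continuous power law p(D) ~ D**(-alpha)
    for D >= d_min (the standard Clauset et al. 2009 estimator)."""
    d = np.asarray(diameters, dtype=float)
    d = d[d >= d_min]
    n = d.size
    alpha = 1.0 + n / np.sum(np.log(d / d_min))
    alpha_err = (alpha - 1.0) / np.sqrt(n)      # asymptotic standard error
    return alpha, alpha_err, n

def cumulative_sfd(diameters, bins, area_km2):
    """Cumulative size-frequency distribution N(>=D) per km^2,
    with simple Poisson (sqrt(N)) counting error bars."""
    d = np.asarray(diameters, dtype=float)
    counts = np.array([(d >= b).sum() for b in bins], dtype=float)
    density = counts / area_km2
    density_err = np.sqrt(counts) / area_km2
    return density, density_err

# Synthetic example: 500 crater diameters (km) on a 1e6 km^2 surface,
# drawn from a power law with differential slope ~2.8.
rng = np.random.default_rng(0)
diam = 1.0 * (1.0 - rng.random(500)) ** (-1.0 / 1.8)
alpha, alpha_err, n = fit_powerlaw_mle(diam, d_min=1.0)
dens, dens_err = cumulative_sfd(diam, bins=np.array([1, 2, 4, 8]), area_km2=1e6)
print(f"alpha = {alpha:.2f} +/- {alpha_err:.2f} from {n} craters")
```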
22.
Using Ocean Re-Analysis System 3 (ORA-S3) data, all components of the annual mean heat budget of the upper quasi-homogeneous ocean layer (UQL) in the North Atlantic for the period 1959–2011 have been calculated, and the errors of these estimates have been determined. It is shown that the contribution of horizontal eddy diffusivity (estimated as the residual term of the UQL heat balance equation) to changes in the UQL annual mean temperature is significantly overestimated. This occurs mainly because calculations based on annual average values neglect the covariances between seasonal fluctuations of the current velocity components and of the UQL temperature gradients. These covariances play an important role in the annual mean heat budget in some regions of the North Atlantic, especially at tropical latitudes. Changes in the annual average UQL temperature in the central and eastern parts of the North Atlantic are significantly affected by errors arising from inaccurate estimates of the annual average heat fluxes at the ocean surface. The maximum contribution of horizontal eddy diffusivity to the interannual variability of the UQL temperature is observed in the northwestern part of the North Atlantic and in the region of the Subpolar Gyre.
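As a schematic illustration of the covariance effect described above (not the ORA-S3 analysis itself; the monthly values are invented), the annual mean of an advective term computed from monthly data differs from the product of the annual means by exactly the covariance of the seasonal fluctuations:

```python
import numpy as np

# Hypothetical monthly values at one grid point (12 months):
# current velocity component v (m/s) and UQL temperature gradient dT (K per 100 km).
v  = np.array([0.10, 0.12, 0.15, 0.18, 0.20, 0.22,
               0.21, 0.19, 0.16, 0.13, 0.11, 0.10])
dT = np.array([0.8, 0.9, 1.1, 1.3, 1.5, 1.6,
               1.6, 1.4, 1.2, 1.0, 0.9, 0.8])

# Annual mean of the advective term computed from monthly data ...
adv_monthly = np.mean(v * dT)

# ... versus the product of the annual means, which is what a calculation
# based solely on annual average values would give.
adv_annual = np.mean(v) * np.mean(dT)

# The difference is the covariance of the seasonal fluctuations v' and dT',
# i.e. the term neglected when only annual means are used.
covariance = np.mean((v - v.mean()) * (dT - dT.mean()))

print(adv_monthly, adv_annual, covariance)
print(np.isclose(adv_monthly, adv_annual + covariance))  # True
```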
23.
The objective of the present paper is to derive a set of analytical equations describing a swing-by maneuver performed in a system of primaries that are in elliptical orbits. The goal is to calculate the variations of energy, velocity, and angular momentum as functions of the usual basic parameters that describe the swing-by maneuver, as done before for the case of circular orbits. In elliptical orbits the velocity of the secondary body is no longer constant, as it is in the circular case, but varies with the position of the secondary body in its orbit. As a consequence, the variations of energy, velocity, and angular momentum become functions of the magnitude of the secondary body's velocity and of the angle between its velocity vector and the line connecting the primaries. The “patched-conics” approach is used to obtain these equations. The configurations that result in maximum gains and losses of energy for the spacecraft are then shown, and a comparison between the results obtained from the analytical equations and from numerical simulations is made to validate the method developed here.
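For reference, a minimal planar patched-conics sketch of the simpler circular-orbit case that this paper generalizes: the encounter rotates the incoming excess-velocity vector by the bending angle while preserving its magnitude, and the energy change follows from the velocities before and after. The velocity vectors, gravitational parameter, and periapsis radius below are hypothetical, roughly Jupiter-like numbers.

```python
import numpy as np

def swingby_delta_energy(v_sc_in, v_secondary, mu_secondary, r_periapsis):
    """Planar patched-conics swing-by: rotate the incoming excess-velocity
    vector by the bending angle and return the change in the spacecraft's
    specific orbital energy relative to the primary. Velocities are 2D
    vectors expressed in the primary-centred frame."""
    v_secondary = np.asarray(v_secondary, dtype=float)
    v_sc_in = np.asarray(v_sc_in, dtype=float)
    v_inf_in = v_sc_in - v_secondary
    v_inf = np.linalg.norm(v_inf_in)
    # Hyperbolic passage: sin(delta) = 1 / (1 + r_p * v_inf^2 / mu), total turn = 2*delta.
    sin_delta = 1.0 / (1.0 + r_periapsis * v_inf**2 / mu_secondary)
    turn = 2.0 * np.arcsin(sin_delta)
    # The sense of the rotation (passing ahead of or behind the secondary)
    # decides whether energy is gained or lost; one sense is assumed here.
    c, s = np.cos(turn), np.sin(turn)
    v_inf_out = np.array([c * v_inf_in[0] - s * v_inf_in[1],
                          s * v_inf_in[0] + c * v_inf_in[1]])
    v_sc_out = v_inf_out + v_secondary
    # Position is unchanged across the (instantaneous) encounter, so the
    # potential term cancels and only the kinetic energy changes.
    return 0.5 * (v_sc_out @ v_sc_out - v_sc_in @ v_sc_in)

# Hypothetical, roughly Jupiter-like numbers (km, km/s, km^3/s^2):
dE = swingby_delta_energy(v_sc_in=[8.0, 10.0], v_secondary=[0.0, 13.1],
                          mu_secondary=1.267e8, r_periapsis=1.0e6)
print(f"specific energy change: {dE:.2f} km^2/s^2")
```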
24.
25.

With an increasing demand for raw materials, predictive models that support successful mineral exploration targeting are of great importance. We evaluated different machine learning techniques with an emphasis on boosting algorithms and implemented them in an ArcGIS toolbox. Performance was tested on an exploration dataset from the Iberian Pyrite Belt (IPB) with respect to accuracy, performance, stability, and robustness. Boosting algorithms are ensemble methods used in supervised learning for regression and classification. They combine weak classifiers, i.e., classifiers that perform only slightly better than random guessing, to obtain robust classifiers. Each time a weak learner is added, the learning set is reweighted to give more importance to misclassified samples. Our test area, the IPB, is one of the oldest mining districts in the world and hosts giant volcanic-hosted massive sulfide (VMS) deposits. The spatial density of ore deposits, as well as their size and tonnage, makes the area unique, and the high data availability and large number of known deposits make it well suited for testing machine learning algorithms. We combined several geophysical datasets, as well as layers derived from geological maps, as predictors of the presence or absence of VMS deposits. Boosting algorithms such as BrownBoost and AdaBoost were tested and compared to Logistic Regression (LR), Random Forests (RF), and Support Vector Machines (SVM) in several experiments. Performance was broadly similar across methods, although BrownBoost slightly outperformed LR and SVM, with accuracies of 0.96 compared to 0.89 and 0.93, respectively. Data augmentation by perturbing deposit locations led to a 7% improvement in results. Varying the split ratio of training and test data reduced the accuracy of the prediction, with relative stability down to a critical point of around 26 training samples out of 130 total samples; with fewer training samples, accuracy dropped significantly. In comparison with the other machine learning methods, AdaBoost is user-friendly owing to its relatively short training and prediction times, low likelihood of overfitting, and small number of hyperparameters to optimize. Boosting algorithms gave high predictive accuracies, making them a potential data-driven alternative for regional-scale and/or brownfields mineral exploration. An illustrative sketch of the boosting reweighting step follows below.

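As an illustration of the reweighting idea described in the abstract (a generic sketch, not the authors' ArcGIS toolbox), here is a bare-bones two-class AdaBoost built from decision stumps; the predictor matrix and deposit labels are random placeholders standing in for the geophysical and geological layers.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Minimal two-class AdaBoost: each round fits a decision stump with the
    current sample weights, then up-weights the samples it misclassified.
    Labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # start with uniform sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err >= 0.5:                      # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)      # misclassified samples gain weight
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    score = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(score)

# Random placeholder data: 130 samples, 8 predictor layers.
rng = np.random.default_rng(1)
X = rng.normal(size=(130, 8))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)   # placeholder deposit / no-deposit labels
stumps, alphas = adaboost_fit(X, y)
print("training accuracy:", np.mean(adaboost_predict(stumps, alphas, X) == y))
```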
26.
Solar System Research - Abstract: Results of the implementation of a single concept for the development of the double launch system (SDZ) are presented, in particular the SDZ-La5 for the...
27.
The aim of this paper is to formulate a micromechanics-based approach to the non-aging viscoelastic behavior of materials with randomly distributed micro-fractures. Unlike cracks, fractures are discontinuities that are able to transfer stresses and can therefore be regarded, from a mechanical viewpoint, as interfaces endowed with a specific behavior under normal and shear loading. Making use of the elastic-viscoelastic correspondence principle together with a Mori-Tanaka homogenization scheme, the effective viscoelastic behavior is assessed from the properties of the material constituents and from damage parameters related to the density and size of the fractures. It is notably shown that the homogenized behavior thus formulated can in most cases be described by means of a generalized Maxwell rheological model. For practical implementation in structural analyses, an approximate model for the isotropic homogenized fractured medium is formulated within the class of Burgers models. Although the approximation is primarily developed for the short-term and long-term behaviors, numerical applications indicate that the approximate Burgers model also accurately reproduces the homogenized viscoelastic behavior under transient conditions.
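To make the rheology concrete, here is a small sketch of the creep compliance of a Burgers model (a Maxwell unit in series with a Kelvin-Voigt unit) under constant stress; the parameter values are purely illustrative and unrelated to the paper's homogenized estimates.

```python
import numpy as np

def burgers_creep_compliance(t, E_M, eta_M, E_K, eta_K):
    """Creep compliance J(t) of a Burgers model under constant stress:
    instantaneous elasticity and steady viscous flow of the Maxwell unit
    plus the delayed (transient) response of the Kelvin-Voigt unit."""
    t = np.asarray(t, dtype=float)
    return 1.0 / E_M + t / eta_M + (1.0 / E_K) * (1.0 - np.exp(-E_K * t / eta_K))

# Illustrative parameters (GPa, GPa*day) and times (days).
t = np.linspace(0.0, 100.0, 6)
J = burgers_creep_compliance(t, E_M=20.0, eta_M=2.0e4, E_K=10.0, eta_K=500.0)
eps = 5.0e-3 * J        # strain history under a constant 5 MPa (= 5.0e-3 GPa) stress
for ti, ei in zip(t, eps):
    print(f"t = {ti:5.1f} d   strain = {ei:.6f}")
```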
28.
29.
Local glaciers and ice caps (GICs) comprise only ~5.4% of the total ice volume in Greenland but account for ~14–20% of its current ice loss. The glacial history of GICs is not well constrained, however, and little is known about how they reacted to Holocene climate changes. Specifically, in North Greenland there is limited knowledge about past GIC fluctuations and whether they survived the Holocene Thermal Maximum (HTM, ~8 to 5 ka). In this study, we use proglacial lake records to constrain the ice-marginal fluctuations of three local ice caps in North Greenland, including Flade Isblink, the largest ice cap in Greenland. Additionally, we have radiocarbon-dated reworked marine molluscs in Little Ice Age (LIA) moraines adjacent to Flade Isblink, which reveal when the ice cap was smaller than at present. We found that outlet glaciers from Flade Isblink retreated inland of their present extent from ~9.4 to 0.2 cal. ka BP. The proglacial lake records, however, demonstrate that the lakes continued to receive glacial meltwater throughout the entire Holocene. This implies that GICs in Finderup Land survived the HTM. Our results are consistent with other observations from North Greenland but differ from those in southern Greenland, where all records show that local ice caps at low and intermediate elevations disappeared completely during the HTM. We explain the north–south gradient in glacier response as a result of differing sensitivities to increased temperature and precipitation. While increased temperatures during the HTM led to complete melting of GICs in southern Greenland, GICs persisted in North Greenland, probably because melting was counterbalanced by increased precipitation due to a reduction in Arctic sea-ice extent and/or increased poleward moisture transport.
30.
Manually collected snow data are often considered ground truth for many applications, such as climatological or hydrological studies. However, there are many sources of uncertainty that are not quantified in detail. For the determination of the water equivalent of the snow cover (SWE), different snow core samplers and scales are used, but they are all based on the same measurement principle. We conducted two field campaigns with nine samplers commonly used in observational measurements and research in Europe and North America to better quantify the uncertainties in measuring depth, density, and SWE with core samplers. During the first campaign, as a first approach to distinguishing the snow variability measured at the plot scale from that at the point scale, repeated measurements were taken along two 20 m long snow pits. The results revealed a much higher variability of SWE at the plot scale (resulting from both natural variability and instrumental bias) compared with repeated measurements at the same spot (resulting mostly from observer-induced error or very small-scale variability of snow depth). The exceptionally homogeneous snowpack encountered in the second campaign made it possible to almost neglect the natural variability of the snowpack properties and to focus on separating instrumental bias from observer-induced error. The reported uncertainties refer to a shallow, homogeneous tundra-taiga snowpack less than 1 m deep (loose, mostly recrystallised snow with no wind impact). Under such measurement conditions, the uncertainty in bulk snow density estimation is about 5% for an individual instrument and close to 10% among different instruments. The results confirmed that instrumental bias exceeded both the natural variability and the observer-induced error, even when observers were not familiar with a given snow core sampler.
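For context, the relation underlying all such core samplers, bulk density from the core mass and sampled volume and SWE from density and depth, in a short sketch with hypothetical numbers; the tube cross-section depends on the specific sampler.

```python
# Water equivalent of snow cover (SWE) from a single core sample.
# All numbers below are hypothetical; the tube area depends on the sampler used.

RHO_WATER = 1000.0                      # kg/m^3

def swe_from_core(core_mass_kg, tube_area_m2, snow_depth_m):
    """Bulk density and SWE from one core: density = mass / (area * depth);
    SWE in kg/m^2 is numerically equal to millimetres of water equivalent."""
    bulk_density = core_mass_kg / (tube_area_m2 * snow_depth_m)   # kg/m^3
    swe_mm = bulk_density * snow_depth_m / RHO_WATER * 1000.0     # mm w.e.
    return bulk_density, swe_mm

# Example: 0.8 kg of snow in a 50 cm^2 tube over a 0.6 m deep snowpack.
density, swe = swe_from_core(core_mass_kg=0.8, tube_area_m2=50e-4, snow_depth_m=0.6)
print(f"bulk density = {density:.0f} kg/m^3, SWE = {swe:.0f} mm")
# A 5% density error maps directly into a 5% SWE error for a fixed depth.
print(f"+/-5% density -> SWE between {0.95 * swe:.0f} and {1.05 * swe:.0f} mm")
```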